Fixed minimizer behaviour on exceeded iterations #231
Conversation
henrikjacobsenfys
left a comment
Looks good to me apart from a tiny nitpick.
This may also be a good opportunity to make a __repr__ method for the FitResults class?
Good idea!

```python
lines = [
    f"FitResults(success={self.success}",
    f"  n_pars={self.n_pars}, n_points={len(self.x)}",
    f"  chi2={self.chi2:.4g}, reduced_chi={self.reduced_chi:.4g}",
    f"  n_evaluations={self.n_evaluations}",
    f"  minimizer={self.minimizer_engine.__name__ if self.minimizer_engine else None}",
]
```
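Building on that list, a complete `__repr__` might look like the sketch below. The surrounding `FitResults` stand-in is illustrative only — its constructor and field set are assumptions made so the example is self-contained, not the class's actual definition:

```python
class FitResults:
    # Minimal stand-in carrying only the fields the __repr__ uses.
    def __init__(self, success, n_pars, x, chi2, reduced_chi,
                 n_evaluations, minimizer_engine=None):
        self.success = success
        self.n_pars = n_pars
        self.x = x
        self.chi2 = chi2
        self.reduced_chi = reduced_chi
        self.n_evaluations = n_evaluations
        self.minimizer_engine = minimizer_engine

    def __repr__(self):
        # Resolve the engine name up front so the f-strings stay readable.
        engine = self.minimizer_engine.__name__ if self.minimizer_engine else None
        lines = [
            f"FitResults(success={self.success}",
            f"  n_pars={self.n_pars}, n_points={len(self.x)}",
            f"  chi2={self.chi2:.4g}, reduced_chi={self.reduced_chi:.4g}",
            f"  n_evaluations={self.n_evaluations}",
            f"  minimizer={engine})",
        ]
        return "\n".join(lines)


result = FitResults(True, 2, [0.0, 1.0, 2.0], 1.234, 0.617, 42)
print(result)
```

Joining with newlines gives a compact multi-line summary when a result is echoed in a REPL or notebook.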
damskii9992
left a comment
Generally looks good. I have some questions/potential changes before approval.
Also, do we want to raise an error or a warning when a fit does not converge within max_iterations?
```diff
  if getattr(results, name, False):
      setattr(results, name, value)
- results.success = not bool(fit_results.flag)
+ results.success = fit_results.flag == fit_results.EXIT_SUCCESS
```
Can you explain this? What is fit_results.EXIT_SUCCESS?
And should we raise a warning if the fit did not succeed?
In DFO-LS, EXIT_SUCCESS is 0, returned when the "objective is sufficiently small" or "rho has reached rhoend". Positive codes are warning exits, and negative codes are hard errors.

As for raising a warning: not for every non-success exit. Hard failures already become an exception in minimizer_dfo, so warning on those would be redundant. But for warning-style exits that still return results, especially EXIT_MAXFUN_WARNING, adding a UserWarning would be reasonable; that would also be consistent with BUMPS, which warns when it hits its evaluation budget.
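A minimal sketch of that policy — raise on hard errors, warn only on budget-exhausted exits. The helper name `check_dfo_exit` and the concrete flag values are assumptions made for illustration; only the sign convention (0 = success, positive = warning, negative = error) comes from the DFO-LS behaviour described above:

```python
import warnings

# Illustrative DFO-LS-style exit codes (concrete values are assumptions;
# the convention is 0 = success, positive = warning, negative = hard error).
EXIT_SUCCESS = 0
EXIT_MAXFUN_WARNING = 1


def check_dfo_exit(flag, msg):
    """Raise on hard errors, warn on the max-evaluations exit, else pass."""
    if flag < 0:
        # Hard failure: surface it as an exception, as minimizer_dfo does.
        raise RuntimeError(f"Fit failed with message: {msg}")
    if flag == EXIT_MAXFUN_WARNING:
        # Result is still usable, but the caller should know it hit the budget.
        warnings.warn(f"Fit did not converge: {msg}", UserWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_dfo_exit(EXIT_MAXFUN_WARNING, "Objective has been called MAXFUN times")

print(len(caught))  # 1
```

The point of the split is that warning-style exits still return a usable result object, so they should not abort the caller's control flow.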
Added warning and unit test
```diff
- if 'Success' not in results.msg:
-     raise FitError(f'Fit failed with message: {results.msg}')
+ if results.flag in {results.EXIT_SUCCESS, results.EXIT_MAXFUN_WARNING}:
```
Could you also explain this? Maybe add a comment?
```python
except FitError:
    for key in self._cached_pars.keys():
        self._cached_pars[key].value = self._cached_pars_vals[key][0]
    raise
```
Codecov says this misses a test :)
```python
results.y_calc = fit_results.best_fit
results.y_err = 1 / fit_results.weights
results.n_evaluations = fit_results.nfev
results.message = fit_results.message
```
We should probably either raise an error or a warning when max_evaluations have been reached.
```python
assert result.success is False
assert result.n_evaluations is not None
assert result.n_evaluations > 0
```
Since the fit reached max evaluations, shouldn't this be:

```python
assert result.n_evaluations == 3
```

```python
assert domain_fit_results.y_calc == 'evaluate'
assert domain_fit_results.y_err == 'dy'
assert domain_fit_results.n_evaluations == 7
assert domain_fit_results.message == 'Fit stopped: reached maximum evaluations (3)'
```
Huh? Max evaluations was 3, but the result reports 7 evaluations? What?

I would prefer to just emit the warning and let the caller check the results object for convergence.
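That caller-side pattern could look like the sketch below. Here `fit_with_budget` is a hypothetical stand-in for a fit that exhausts its evaluation budget; only the `success`, `n_evaluations`, and `message` field names follow this PR:

```python
import warnings
from types import SimpleNamespace


def fit_with_budget(max_evaluations):
    """Toy stand-in for a fit that runs out of its evaluation budget."""
    message = f"Fit stopped: reached maximum evaluations ({max_evaluations})"
    # Emit the warning instead of raising, then hand back a results object.
    warnings.warn(message, UserWarning)
    return SimpleNamespace(
        success=False,
        n_evaluations=max_evaluations,
        message=message,
    )


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = fit_with_budget(3)

# The caller, not the minimizer, decides how to handle non-convergence.
if not result.success:
    print(result.message)
```

This keeps hard failures (exceptions) distinct from soft ones (warnings plus a `success=False` flag the caller can inspect).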
This PR improves fitting result reporting and evaluation tracking across all minimizer engines, ensuring that the number of function evaluations and relevant status messages are consistently captured and propagated.
- All minimizer engines (`Bumps`, `DFO`, `LMFit`) now populate the `n_evaluations` (number of function evaluations) and `message` (status or error message) fields in `FitResults`, and these fields are propagated in multi-dataset fitting.
- The `Bumps` minimizer uses a new `_EvalCounter` wrapper to count function evaluations and detect when the maximum evaluation budget is reached, updating the `success` flag and message accordingly.
- The `DFO` minimizer now raises `FitError` only for real failures, and treats both success and max-function-evaluation warnings as successful fits, improving error reporting and testability.